Juan/sc 10379/list return types in the list tests output#383
johnwalz97 merged 13 commits into main
Conversation
…dation and metric functions
- Added support for inspecting return types to determine whether they include figures or tables (see the sketch below).
- Introduced new type hints for Matplotlib and Plotly figures.
- Updated the test listing function to include flags for the presence of figures and tables in the output.
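As a rough illustration of the return-type inspection idea, a helper along these lines could read a test function's return annotation and derive the two flags. This is a hypothetical sketch, not the actual implementation; the "Unknown" fallback mirrors the option discussed later in this thread.

```python
import typing

import matplotlib.figure
import pandas as pd
import plotly.graph_objects as go

# Hypothetical helper: inspect a test function's return annotation and flag
# whether it mentions figure-like or table-like types.
FIGURE_TYPES = (matplotlib.figure.Figure, go.Figure)
TABLE_TYPES = (pd.DataFrame,)


def output_flags(test_func):
    hints = typing.get_type_hints(test_func)
    return_type = hints.get("return")
    if return_type is None:
        # No annotation, so we cannot tell what the test produces.
        return {"has_figures": "Unknown", "has_tables": "Unknown"}

    # Flatten Tuple/Union annotations into their member types.
    members = typing.get_args(return_type) or (return_type,)
    return {
        "has_figures": any(m in FIGURE_TYPES for m in members),
        "has_tables": any(m in TABLE_TYPES for m in members),
    }
```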
This looks very nice! I think outputting two flags makes the result clear and easy to interpret. However, I have a couple of comments on that:
Hmm, good points. So for any custom test that doesn't have return type annotations, we are not going to be able to tell what the outputs are. I can clarify the warning message to make it a little more clear what users should do. And I can set the column to "Unknown" like you mentioned. As far as adding a …
@johnwalz97 @juanmleng and @cachafla It will allow us to add more return types without disturbing …
That's not a bad idea. The only downside is that it makes it a bit more difficult to filter the DataFrame down to only tests that have tables or only tests that have figures, but it would definitely make the table more compact. Right now, it's only set up to tell whether a test outputs a table or a figure; it shouldn't be difficult to add detection for the other types, it's just a question of whether we want it. Let me know what you guys think.
@johnwalz97 how does the user filter the list of tests for "show me tests that output figures"?
Right now, they would have to programmatically filter the list that's returned if they don't use the pretty argument, or they would have to get the DataFrame from the Styler object and filter that (see the sketch below). There isn't a filtering parameter right now, but do you want to add one as part of this ticket?
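Something along these lines would work as a sketch; the vm.tests module path, the pretty argument, and the column label are assumptions based on this thread rather than confirmed names.

```python
import validmind as vm

# The pretty output is a pandas Styler; its .data attribute is the underlying
# DataFrame, which can be filtered like any other frame.
styler = vm.tests.list_tests(pretty=True)
df = styler.data
figure_tests = df[df["Has Figures"]]  # column label assumed for illustration

# Alternatively, skip the Styler and filter the raw list programmatically.
raw = vm.tests.list_tests(pretty=False)
```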
PR Summary
This pull request introduces extensive updates across the codebase by adding explicit return type annotations and updating function signatures with appropriate type hints. The changes span multiple modules and directories including tests, ongoing monitoring, prompt validation, and unit metrics for both classification and regression. Key enhancements include:
These changes do not alter the existing logic or functionality. Instead, they serve to improve the code quality, ease future modifications, and facilitate better static type checking and code analysis.
Test Suggestions
cachafla left a comment
I updated notebooks/how_to/explore_tests.ipynb. I suggest we merge this and address improvements later (like filtering or adding something to describe_test) since this PR makes changes to every test.


Pull Request Description
What and why?
This PR adds two columns to the list_tests() output, specifically 'has tables' and 'has figures'. These columns let users know what to expect in a test's output before they run it.
How to test
To test, open a notebook and run a snippet along the lines of the one below:
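This is a minimal stand-in, assuming the package is imported as vm and that list_tests lives under vm.tests as in the thread above; the exact call may differ.

```python
import validmind as vm

# The pretty listing should now include the new figure/table columns
# alongside each test ID.
vm.tests.list_tests()
```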
What needs special review?
Please let me know what you think about the way the information is presented, specifically via flags. Alternatively, we could combine the output information into a single column.
Dependencies, breaking changes, and deployment notes
There should be no breaking or logical changes besides the additional columns since the majority of changes are just return type annotations.
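For a sense of what those annotation-only changes look like, here is an illustrative example; the function name is hypothetical, and only the signature gains an explicit return type while the body is untouched.

```python
from typing import Tuple

import pandas as pd
import plotly.graph_objects as go


# Hypothetical test function: the change adds the return annotation only.
def confusion_matrix_results(df: pd.DataFrame) -> Tuple[pd.DataFrame, go.Figure]:
    ...
```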
Release notes
Added columns in the list_tests() output to show what kind of artifacts each test in the ValidMind library produces.
Checklist